Results 1 - 8 of 8
1.
Biomed Tech (Berl); 2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38491745

ABSTRACT

OBJECTIVES: In this study, we developed a machine learning approach for analyzing postoperative corneal endothelial cell images of patients who underwent Descemet membrane endothelial keratoplasty (DMEK). METHODS: An AlexNet-based model is proposed and validated for endothelial cell segmentation and cell localization. A total of 506 images of postoperative corneal endothelial cells were analyzed. The pipeline performs endothelial cell detection, segmentation, and determination of the cells' polygonal structure. The proposed model trains an R-CNN to locate endothelial cells; the ridges separating adjacent cells are then determined, from which the density and hexagonality rates of DMEK patients are calculated. RESULTS: The proposed method reached an accuracy of 86.15% and an F1 score of 0.857, which indicates that it can reliably replace manual detection of cells in in vivo confocal microscopy (IVCM). The AUC score of 0.764 for the proposed segmentation method suggests a satisfactory outcome. CONCLUSIONS: A model focused on segmenting endothelial cells can be employed to assess the health of the endothelium in DMEK patients.
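
The abstract reports density and hexagonality as the two clinical metrics derived from the segmented cells. Below is a minimal sketch of how those metrics could be computed from a segmentation output; the polygon-based data structure, the imaged-area value, and the six-sided-cell criterion are illustrative assumptions, not the authors' implementation.

# Illustrative sketch (not the authors' code): given a list of segmented cell
# polygons, compute endothelial cell density and hexagonality, the two metrics
# reported for DMEK patients. Polygon vertex counts stand in for the ridge-based
# neighbour detection described in the abstract.

def endothelial_metrics(cell_polygons, image_area_mm2):
    """cell_polygons: list of vertex lists, one per segmented cell.
    image_area_mm2: imaged area in mm^2 (assumed known from the microscope)."""
    n_cells = len(cell_polygons)
    density = n_cells / image_area_mm2            # cells per mm^2
    hexagonal = sum(1 for poly in cell_polygons if len(poly) == 6)
    hexagonality = 100.0 * hexagonal / n_cells if n_cells else 0.0
    return density, hexagonality

# Example with two hypothetical cells on a 0.1 mm^2 field of view
cells = [[(0, 0), (1, 0), (2, 1), (2, 2), (1, 3), (0, 2)],   # six-sided cell
         [(3, 0), (4, 0), (4, 1), (3, 1), (2.5, 0.5)]]       # five-sided cell
print(endothelial_metrics(cells, image_area_mm2=0.1))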

2.
PeerJ Comput Sci; 10: e1818, 2024.
Article in English | MEDLINE | ID: mdl-38435576

ABSTRACT

Teaching computer architecture (Comp-Arch) courses in undergraduate curricula is becoming more of a challenge as most students prefer software-oriented courses. In some computer science/engineering departments, Comp-Arch courses are offered without a lab component due to resource constraints and differing pedagogical priorities. This article demonstrates how students working in teams are motivated to study a Comp-Arch course and how instructors can increase student motivation and knowledge through hands-on practice. The teams are asked to design and implement a 16-bit MIPS-like processor under constraints such as a specified instruction set and limited data and instruction memory. The student projects comprise three phases: design, desktop simulator implementation, and verification using a hardware description language (HDL). In the design phase, teams develop their own architecture to implement the specified instructions. A range of designs resulted, e.g., (a) a processor with extensive user-defined instructions but longer cycle times, and (b) a processor with a minimal instruction set but a faster clock cycle. Next, the teams develop a desktop simulator in a programming language of their choice to execute instructions on their architecture. Finally, the students carry out Verilog HDL projects to simulate and verify the data path designed in the initial phase. Student feedback and self-assessed understanding of the project were collected through a questionnaire of Likert-scale questions, some on a ten-point scale and others on a five-point scale. The survey results show that the hands-on approach increases students' motivation and knowledge in the Comp-Arch course, which is centered on computer system design principles. The approach can also be extended to related courses, such as Microprocessor Design, which delves into the intricacies of creating and implementing microprocessors or central processing units (CPUs) at the hardware level. Furthermore, the study demonstrates that interactions between students in each phase, specifically through peer reviews and public presentations, increase their knowledge of and perspective on designing custom processors.
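
The desktop-simulator phase described above amounts to a fetch-decode-execute loop over a small instruction set. Below is a minimal sketch of such a simulator for a hypothetical 16-bit encoding (4-bit opcode and register fields); the encoding and the four instructions shown are assumptions for illustration, not the course's actual specification.

# Minimal sketch of the "desktop simulator" phase: a decode-execute step for a
# hypothetical 16-bit MIPS-like ISA (4-bit opcode, three 4-bit register fields).

REGS = [0] * 16          # 16 general-purpose 16-bit registers
MEM = [0] * 256          # small data memory, one word per address

def step(instr):
    opcode = (instr >> 12) & 0xF
    rd, rs, rt = (instr >> 8) & 0xF, (instr >> 4) & 0xF, instr & 0xF
    if opcode == 0x0:                       # ADD rd, rs, rt
        REGS[rd] = (REGS[rs] + REGS[rt]) & 0xFFFF
    elif opcode == 0x1:                     # SUB rd, rs, rt
        REGS[rd] = (REGS[rs] - REGS[rt]) & 0xFFFF
    elif opcode == 0x2:                     # LW rd, [rs]
        REGS[rd] = MEM[REGS[rs] & 0xFF]
    elif opcode == 0x3:                     # SW rd, [rs]
        MEM[REGS[rs] & 0xFF] = REGS[rd]
    else:
        raise ValueError(f"unimplemented opcode {opcode:#x}")

# Tiny program: R1 = 2 and R2 = 3 are preloaded, then R3 = R1 + R2
REGS[1], REGS[2] = 2, 3
step(0x0312)             # ADD R3, R1, R2
print(REGS[3])           # -> 5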

3.
Sensors (Basel); 23(20), 2023 Oct 15.
Article in English | MEDLINE | ID: mdl-37896570

ABSTRACT

In this paper, a novel feature generator framework is proposed for handwritten digit classification. The framework consists of a two-stage cascaded feature generator. The first stage is based on principal component analysis (PCA), which generates the data projected onto the principal components as features. The second stage is a partially trained neural network (PTNN), which takes the projected data as input and produces its hidden-layer outputs as features. The features obtained from the PCA- and PTNN-based generator are evaluated on the MNIST and USPS handwritten digit datasets. A minimum distance classifier (MDC) and a support vector machine (SVM) are used as classifiers for the generated features within this framework. The performance evaluation shows that the proposed framework outperforms state-of-the-art techniques, achieving accuracies of 99.9815% and 99.9863% on the MNIST and USPS datasets, respectively. The results also show that the framework achieves almost perfect accuracies even with very small training set sizes.
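
As a rough illustration of the cascaded PCA → PTNN → classifier idea, the sketch below uses scikit-learn's small digits dataset as a stand-in for MNIST/USPS, briefly trains a one-hidden-layer network, and feeds its hidden activations to an SVM. The component count, layer size, iteration limit, and the use of MLPClassifier as the "partially trained" network are illustrative choices, not the paper's configuration.

# Two-stage feature generator sketch: PCA projection, then hidden-layer
# activations of a briefly (partially) trained network, then an SVM classifier.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stage 1: project images onto principal components
pca = PCA(n_components=30).fit(X_train)
Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)

# Stage 2: partially train a small network (few iterations), then reuse its
# hidden-layer activations as features
ptnn = MLPClassifier(hidden_layer_sizes=(64,), activation="relu",
                     max_iter=20, random_state=0).fit(Z_train, y_train)
hidden = lambda Z: np.maximum(0.0, Z @ ptnn.coefs_[0] + ptnn.intercepts_[0])

# Final classifier (SVM) on the generated features
svm = SVC(kernel="rbf").fit(hidden(Z_train), y_train)
print("test accuracy:", svm.score(hidden(Z_test), y_test))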

4.
Sensors (Basel); 23(2), 2023 Jan 15.
Article in English | MEDLINE | ID: mdl-36679800

ABSTRACT

This article investigates and discusses challenges in the telecommunication field from multiple perspectives, covering both academia and industry. It surveys the main points of the technological transformation toward the edge-cloud continuum from the view of a telco operator, including the evolution of cloud-native computing, Software-Defined Networking (SDN), and network automation platforms. The cultural shift in software development and management brought by DevOps enabled significant technologies in the telecommunication world, including network equipment, application development, and system orchestration. The effect of this cultural shift on the application area, especially from the IoT point of view, is investigated, along with the enormous change in service diversity and in the capability to deliver services to a mass of devices. During the last two decades, desktop and server virtualization has played an active role in the Information Technology (IT) world; with OpenFlow, SDN, and Network Functions Virtualization (NFV), the network revolution got under way. The shift from monolithic application development and deployment to micro-services changed the whole picture. At the same time, data centers evolved across several generations in which the control plane can no longer manage the entire network without an intelligent decision-making process that benefits from AI/ML techniques. AI also enables operators to forecast demand more accurately, anticipate network load, and adjust capacity and throughput automatically. Going one step further, zero-touch networking and service management (ZSM) is proposed to translate high-level human intents into low-level configurations for network elements with validated results, minimizing the share of faults caused by human intervention. Harmonizing this progress across different communication technologies has enabled the successful use of edge computing. Low-powered (in both energy and processing terms) IoT networks have disrupted customer and end-point demands within the sector and thus paved the way toward the edge computing concept, which completes the picture of the edge-cloud continuum.


Subject(s)
Cloud Computing, Technology, Humans, Automation, Industries, Information Technology
5.
Biomed Tech (Berl); 67(3): 151-159, 2022 Jun 27.
Article in English | MEDLINE | ID: mdl-35470642

ABSTRACT

Epilepsy is a neurological disorder whose diagnosis requires specialists to scrutinize medical data. The diagnosis stage is both time-consuming and challenging, requiring expertise in detecting epileptic seizures in noisy multi-channel EEG data. Automatic classification of EEG signals is therefore crucial to help experts detect epileptic seizures correctly. In this study, a novel hybrid deep learning and SVM technique is applied to restructured EEG data: the EEG signals are transformed into a two-dimensional image sequence. The Clough-Tocher technique is used to interpolate the values recorded by the electrodes placed on the scalp during EEG measurement, estimating the signal strength at the image locations not covered by electrodes. After the parameters of the deep learning architecture were optimized on the validation data, the proposed technique demonstrated unmatched performance in classifying epileptic moments in EEG signals. This study fills a gap in the literature by demonstrating superior performance in the automatic detection of epileptic episodes on a benchmark EEG data set and takes a substantial step towards fully automated detection of epileptic disorders.
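
The transformation of one EEG sample into a 2-D image can be sketched with SciPy's Clough-Tocher interpolator, as below; the electrode coordinates, amplitude values, and 32x32 grid are made-up numbers for illustration, not the study's montage or settings.

# Illustrative sketch: electrode amplitudes measured at known 2-D scalp
# coordinates are interpolated onto a regular grid with the Clough-Tocher
# scheme, producing the image that would be fed to the network.
import numpy as np
from scipy.interpolate import CloughTocher2DInterpolator

# (x, y) electrode positions projected onto a unit disc, and one amplitude each
electrode_xy = np.array([[0.0, 0.8], [-0.6, 0.3], [0.6, 0.3],
                         [-0.5, -0.5], [0.5, -0.5], [0.0, 0.0]])
amplitudes = np.array([1.2, -0.4, 0.9, 0.1, -1.1, 0.3])

interp = CloughTocher2DInterpolator(electrode_xy, amplitudes, fill_value=0.0)

# Sample the interpolant on a 32x32 grid to form the image
grid = np.linspace(-1.0, 1.0, 32)
xx, yy = np.meshgrid(grid, grid)
image = interp(xx, yy)          # shape (32, 32); 0 outside the electrode hull
print(image.shape, float(image.max()))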


Subject(s)
Electroencephalography, Epilepsy, Algorithms, Electroencephalography/methods, Epilepsy/diagnosis, Humans, Seizures/diagnosis, Signal Processing, Computer-Assisted
6.
Sensors (Basel); 22(7), 2022 Mar 23.
Article in English | MEDLINE | ID: mdl-35408066

ABSTRACT

Recent developments in the telecommunication world have allowed customers to share the storage and processing capabilities of their devices by providing services over fast and reliable connections. This evolution, however, requires an incentive system that encourages information exchange in future telecommunication networks. In this study, we propose a mechanism for sharing bandwidth and processing resources among subscribers using smart contracts, with a blockchain-based incentive mechanism that encourages subscribers to share their resources. We demonstrate the applicability of the proposed method through two use cases: (i) exchanging multimedia data and (ii) CPU sharing. We propose a universal user-to-user and user-to-operator payment system, named TelCash, which offers a solution to current roaming problems and establishes trust in X2X communications. TelCash has great potential for solving the roaming-charge and reputation-management (trust) problems in the telecommunications sector. We also show, using a simulation study, that encouraging D2D communication leads to a significant increase in content quality and that there is a threshold after which downloading from the base station drops dramatically and can be kept as low as 10%.
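
As a toy illustration of the incentive idea only (not the TelCash contract or its blockchain), the sketch below keeps a token ledger that credits a provider and debits a consumer for each D2D exchange; the class, method names, balances, and prices are all assumptions introduced for this example.

# Toy incentive ledger: a consumer pays a provider in tokens for shared
# bandwidth, so peers are rewarded for offloading traffic from the base station.

class IncentiveLedger:
    def __init__(self):
        self.balances = {}                       # subscriber id -> token balance

    def register(self, subscriber, initial_credit=0):
        self.balances.setdefault(subscriber, initial_credit)

    def settle_transfer(self, provider, consumer, megabytes, price_per_mb=1):
        """Record one D2D exchange: the consumer pays the provider in tokens."""
        cost = megabytes * price_per_mb
        if self.balances.get(consumer, 0) < cost:
            raise ValueError("consumer has insufficient tokens")
        self.balances[consumer] -= cost
        self.balances[provider] += cost

ledger = IncentiveLedger()
ledger.register("alice", initial_credit=100)     # consumer
ledger.register("bob")                           # provider sharing bandwidth
ledger.settle_transfer(provider="bob", consumer="alice", megabytes=40)
print(ledger.balances)                           # {'alice': 60, 'bob': 40}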

7.
Sensors (Basel); 21(23), 2021 Nov 30.
Article in English | MEDLINE | ID: mdl-34884012

ABSTRACT

This paper investigates and proposes a solution that enables the Protocol Independent Switch Architecture (PISA) to process application-layer data, making inspection of application content possible. PISA is a novel approach in networking in which the switch does not run embedded binary code but rather interpreted code written in a domain-specific language. The main motivation behind this approach is that telecommunication operators do not want to be locked in to a single vendor for any type of networking equipment and want to develop their own networking code in a hardware environment that is not governed by a single equipment manufacturer. This approach also eases the modeling of equipment in a simulation environment, since all components of a hardware switch run the same compatible code in a software-modeled switch. The techniques in this paper exploit the main functions of a programmable switch and combine them with a streaming data processor to achieve, from a telecommunication operator's perspective, lower costs and comprehensive network governance. The results indicate that the proposed solution using PISA switches enables application visibility with outstanding performance. This ability helps operators close a fundamental gap between flexibility and scalability by making the best use of limited compute resources for identifying applications and responding to them. The experimental study indicates that, without any optimization, the proposed solution increases the performance of application identification systems by a factor of 5.5 to 47.0. This suggests that DPI, NGFW (Next-Generation Firewall), and similar application-layer systems, which have a high cost per unit of traffic volume and cannot scale to Tbps levels, can be combined with PISA to overcome these cost and scalability issues.
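
To illustrate the match-action style of per-packet processing that PISA switches apply, the sketch below models a single table that matches payload prefixes against application signatures and keeps per-application byte counters. Real PISA targets are programmed in a domain-specific language such as P4; the table contents and packets here are illustrative assumptions, not the paper's system.

# Conceptual match-action sketch: identify the application from a payload
# signature (match stage) and update its byte counter (action stage).

SIGNATURE_TABLE = [                 # (payload prefix, application label)
    (b"\x16\x03", "tls"),           # TLS record header
    (b"GET ", "http"),
    (b"POST", "http"),
]

byte_counters = {}

def apply_pipeline(payload: bytes):
    """Match the payload prefix against the signature table, then count bytes."""
    app = next((label for prefix, label in SIGNATURE_TABLE
                if payload.startswith(prefix)), "unknown")
    byte_counters[app] = byte_counters.get(app, 0) + len(payload)
    return app

for pkt in [b"GET /index.html HTTP/1.1\r\n", b"\x16\x03\x01\x00\x5a", b"\x00\x01"]:
    apply_pipeline(pkt)
print(byte_counters)                # per-application byte totals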

8.
PeerJ Comput Sci; 7: e538, 2021.
Article in English | MEDLINE | ID: mdl-34084935

ABSTRACT

In the construction of effective and scalable overlay networks, publish/subscribe (pub/sub) network designers prefer to keep both the diameter and the maximum node degree of the network low. However, existing algorithms cannot decrease the maximum node degree and the network diameter simultaneously. To address this issue in an overlay network with multiple topics, we present a heuristic algorithm, the constant-diameter minimum-maximum degree (CD-MAX) algorithm, which decreases the maximum node degree while keeping the diameter of the overlay network at two at most. The algorithm, based on the greedy merge algorithm, selects the node with the minimum number of neighbors. The output of CD-MAX is enhanced by a refinement stage, the CD-MAX-Ref algorithm, which further reduces the maximum node degree. Simulation results indicate that the CD-MAX and CD-MAX-Ref algorithms improve the maximum node degree by up to 64% and run up to four times faster than similar algorithms.
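
A simplified reading of the core idea, not the published CD-MAX algorithm, is sketched below: for each topic, all subscribers are connected through one hub (so the per-topic overlay has diameter at most two), and the hub is chosen as the subscriber with the fewest current overlay neighbours to keep the maximum degree low. The subscription data and node names are made up for illustration.

# Greedy hub selection sketch for a topic-based pub/sub overlay.
from collections import defaultdict

def build_overlay(subscriptions):
    """subscriptions: dict mapping topic -> list of subscriber node ids."""
    neighbours = defaultdict(set)                         # node -> overlay neighbours
    for topic, nodes in subscriptions.items():
        hub = min(nodes, key=lambda n: len(neighbours[n]))  # minimum-degree choice
        for node in nodes:
            if node != hub:
                neighbours[hub].add(node)
                neighbours[node].add(hub)
    return neighbours

overlay = build_overlay({
    "sports":  ["a", "b", "c"],
    "finance": ["b", "c", "d"],
    "weather": ["a", "d"],
})
max_degree = max(len(v) for v in overlay.values())
print(max_degree, {n: sorted(v) for n, v in overlay.items()})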
